


Towards Scalable Bayesian Learning of Causal DAGs

Neural Information Processing Systems

We give methods for Bayesian inference of directed acyclic graphs (DAGs) and the induced causal effects from passively observed complete data. Our methods build on a recent Markov chain Monte Carlo scheme for learning Bayesian networks, which enables efficient approximate sampling from the graph posterior, provided that each node is assigned a small number K of candidate parents. We present algorithmic techniques that significantly reduce the space and time requirements, making substantially larger values of K feasible. Furthermore, we investigate the problem of selecting the candidate parents per node so as to maximize the covered posterior mass. Finally, we combine our sampling method with a novel Bayesian approach for estimating causal effects in linear Gaussian DAG models. Numerical experiments demonstrate the performance of our methods in detecting ancestor-descendant relations, and in causal effect estimation our Bayesian method is shown to outperform previous approaches.
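To make the linear Gaussian setting concrete, here is a minimal illustrative sketch (not the paper's actual Bayesian estimator) of how a causal effect is identified in such a model: on a known DAG, regressing the outcome on the treatment plus the treatment's parents recovers the total causal effect via backdoor adjustment. All variable names and coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical linear Gaussian DAG: Z -> X, Z -> Y, X -> Y, unit-variance noise.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)             # edge Z -> X
y = 1.5 * x + 0.5 * z + rng.normal(size=n)   # true causal effect of X on Y is 1.5

def linear_effect(treatment, outcome, adjustment):
    """OLS coefficient of `treatment` on `outcome`, adjusting for `adjustment`.

    In a linear Gaussian DAG, adjusting for the treatment's parents blocks
    all backdoor paths, so this coefficient equals the total causal effect.
    """
    design = np.column_stack([treatment, *adjustment, np.ones_like(treatment)])
    coef, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coef[0]

naive = linear_effect(x, y, [])      # confounded by Z, biased upward
adjusted = linear_effect(x, y, [z])  # adjusts for X's parent Z, near 1.5
```

The paper's method goes further: rather than assuming one known DAG, it averages such effect estimates over DAGs sampled from the posterior.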


Review for NeurIPS paper: Towards Scalable Bayesian Learning of Causal DAGs

Neural Information Processing Systems

Weaknesses: The novelty of the paper is very limited. The authors concentrate on computational tricks, trying to improve the scalability of the algorithm, and they achieve some success. However, for a NeurIPS paper I would expect not only an improved implementation of the algorithm but also some new concepts. I did not find any new ideas in that sense.


Review for NeurIPS paper: Towards Scalable Bayesian Learning of Causal DAGs

Neural Information Processing Systems

This paper presents a collection of useful tricks to speed up Bayesian computations for causal discovery algorithms. Despite some concerns regarding novelty, all reviewers agreed that this paper is well-written and could help spur interest and further developments in Bayesian algorithms for BNSL.

